17 research outputs found

    Mapping constrained optimization problems to quantum annealing with application to fault diagnosis

    Current quantum annealing (QA) hardware suffers from practical limitations such as finite temperature, sparse connectivity, small qubit numbers, and control error. We propose new algorithms for mapping Boolean constraint satisfaction problems (CSPs) onto QA hardware while mitigating these limitations. In particular, we develop a new embedding algorithm for mapping a CSP onto a hardware Ising model with a fixed sparse set of interactions, and propose two new decomposition algorithms for solving problems too large to map directly into hardware. The mapping technique is locally structured: hardware-compatible Ising models are generated for each problem constraint, and variables appearing in different constraints are chained together using ferromagnetic couplings. In contrast, global embedding techniques generate a hardware-independent Ising model for all the constraints and then use a minor-embedding algorithm to generate a hardware-compatible Ising model. We give an example of a class of CSPs for which the scaling performance of D-Wave's QA hardware using the local mapping technique is significantly better than that of global embedding. We validate the approach by applying D-Wave's hardware to circuit-based fault diagnosis. For circuits that embed directly, we find that the hardware is typically able to find all solutions from a min-fault diagnosis set of size N using 1000N samples, at an annealing rate 25 times faster than a leading SAT-based sampling method. Further, we apply decomposition algorithms to find min-cardinality faults for circuits that are up to 5 times larger than can be solved directly on current hardware. Comment: 22 pages, 4 figures
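
    The flavour of the local mapping can be illustrated with a minimal, self-contained sketch (not the paper's algorithm): a single Boolean constraint z = x AND y is encoded as a standard QUBO penalty whose zero-energy states are exactly the satisfying assignments, and two copies of a shared variable are tied together with a ferromagnetic chain penalty. The variable names and the brute-force check below are illustrative only.

        from itertools import product

        def and_penalty(x, y, z):
            """Standard QUBO penalty: equals 0 exactly when z = x AND y, and >= 1 otherwise."""
            return x * y - 2 * (x + y) * z + 3 * z

        def chain_penalty(a, b, strength=2.0):
            """Ferromagnetic chain: penalize copies of one logical variable that disagree."""
            return strength * (a - b) ** 2

        def energy(bits):
            # Two constraints share logical variable x, held in copies x1 and x2:
            #   constraint 1: z1 = x AND y1 (uses copy x1)
            #   constraint 2: z2 = x AND y2 (uses copy x2)
            x1, x2, y1, y2, z1, z2 = bits
            return (and_penalty(x1, y1, z1)
                    + and_penalty(x2, y2, z2)
                    + chain_penalty(x1, x2))

        ground = min(energy(b) for b in product((0, 1), repeat=6))
        solutions = [b for b in product((0, 1), repeat=6) if energy(b) == ground]
        for x1, x2, y1, y2, z1, z2 in solutions:
            assert x1 == x2 and z1 == (x1 & y1) and z2 == (x2 & y2)
        print(len(solutions), "ground states; all satisfy both constraints and the chain")

    On real hardware the same idea appears as local fields and couplers on spin variables rather than an exhaustive enumeration, with the chain strength chosen large enough that assignments with broken chains never reach the ground state.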

    Fast optical layer mesh protection using pre-cross-connected trails

    Conventional optical networks are based on SONET rings, but since rings are known to use bandwidth inefficiently, there has been much research into shared mesh protection, which promises significant bandwidth savings. Unfortunately, most shared mesh protection schemes cannot guarantee that failed traffic will be restored within the 50 ms timeframe that SONET standards specify. A notable exception is the p-cycle scheme of Grover and Stamatelakis. We argue, however, that p-cycles have certain limitations: e.g., there is no easy way to adapt p-cycles to a path-based protection scheme, and p-cycles seem more suited to static traffic than to dynamic traffic. In this paper we show that the key to fast restoration times is not a ring-like topology per se, but rather the ability to pre-cross-connect protection paths. This leads to the concept of a pre-cross-connected trail, or PXT, a structure that is more flexible than rings and that adapts readily to both path-based and link-based schemes and to both static and dynamic traffic. The PXT protection scheme achieves fast restoration speeds, and our simulations, which have been carefully chosen using ideas from experimental design theory, show that the bandwidth efficiency of the PXT protection scheme is comparable to that of conventional shared mesh protection schemes. Comment: Article has appeared in IEEE/ACM Trans. Networking
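
    The structural idea behind pre-cross-connectability can be illustrated with a small graph-theoretic sketch using invented link data (this is classical graph theory, not the paper's protection-design algorithm): reserved protection links can be cross-connected in advance only along trails, i.e. walks that repeat no link, and a connected set of links with k odd-degree nodes needs max(1, k/2) edge-disjoint trails to cover it.

        from collections import defaultdict

        def min_trails(links):
            """Minimum number of edge-disjoint trails needed to cover every link."""
            degree = defaultdict(int)
            adj = defaultdict(set)
            for u, v in links:
                degree[u] += 1
                degree[v] += 1
                adj[u].add(v)
                adj[v].add(u)

            seen, total = set(), 0
            for start in adj:                      # one pass per connected component
                if start in seen:
                    continue
                seen.add(start)
                stack, component = [start], []
                while stack:
                    u = stack.pop()
                    component.append(u)
                    for w in adj[u] - seen:
                        seen.add(w)
                        stack.append(w)
                odd = sum(1 for v in component if degree[v] % 2)
                total += max(1, odd // 2)          # classical trail-decomposition bound
            return total

        # hypothetical reserved protection links between nodes A..F
        links = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F")]
        print(min_trails(links))  # -> 2: these links cannot be covered by a single trail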

    Design of a Railway Scheduling Model for Dense Services

    We address the problem of generating detailed conflict-free railway schedules for given sets of train lines and frequencies. To solve this problem for large railway networks, we propose a network decomposition into condensation and compensation zones. Condensation zones contain main station areas, where capacity is limited and trains are required to travel at maximum speed. They are connected by compensation zones, where traffic is less dense and time reserves can be introduced to increase stability. In this paper, we focus on the scheduling problem in condensation zones. To gain structure in the schedule, we enforce a time discretisation, which considerably reduces both the problem size and the cognitive load on the dispatchers. The problem is formulated as an independent set problem in a conflict graph, which is then solved using a fixed-point iteration heuristic. Results show that even large-scale problems with dense timetables and large topologies can be solved quickly.
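
    As a toy illustration of the scheduling abstraction (invented data, with a plain greedy pass standing in for the paper's fixed-point iteration heuristic): every pair of train and discretised departure slot is a vertex, two vertices conflict when the trains would violate a headway, and a schedule picks one non-conflicting vertex per train.

        # hypothetical candidates: train id -> feasible discretised departure slots (minutes)
        candidates = {
            "IC1": [0, 2, 4],
            "RE2": [0, 1, 3],
            "S3":  [1, 2, 5],
        }

        def conflict(slot_a, slot_b, headway=2):
            """Crude conflict rule: two departures on a shared track need >= headway minutes."""
            return abs(slot_a - slot_b) < headway

        def greedy_schedule(candidates):
            """Pick one slot per train so that no two picks conflict (greedy stand-in)."""
            chosen = {}
            for train, slots in candidates.items():
                for slot in slots:
                    if all(not conflict(slot, other) for other in chosen.values()):
                        chosen[train] = slot
                        break
                else:
                    return None  # this ordering failed; a real heuristic would keep iterating
            return chosen

        print(greedy_schedule(candidates))  # -> {'IC1': 0, 'RE2': 3, 'S3': 5}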

    A General-Purpose Transferable Predictor for Neural Architecture Search

    Understanding and modelling the performance of neural architectures is key to Neural Architecture Search (NAS). Performance predictors have seen widespread use in low-cost NAS and achieve high ranking correlations between predicted and ground-truth performance on several NAS benchmarks. However, existing predictors are often designed based on network encodings specific to a predefined search space and are therefore not generalizable to other search spaces or new architecture families. In this paper, we propose a general-purpose neural predictor for NAS that can transfer across search spaces by representing any given candidate Convolutional Neural Network (CNN) as a Computation Graph (CG) that consists of primitive operators. We further combine our CG network representation with Contrastive Learning (CL) and propose a graph representation learning procedure that leverages the structural information of unlabeled architectures from multiple families to train CG embeddings for our performance predictor. Experimental results on NAS-Bench-101, 201 and 301 demonstrate the efficacy of our scheme, as we achieve a strong positive Spearman Rank Correlation Coefficient (SRCC) on every search space, outperforming several Zero-Cost Proxies, including Synflow and Jacov, which are also generalizable predictors across search spaces. Moreover, when using our proposed general-purpose predictor in an evolutionary neural architecture search algorithm, we can find high-performance architectures on NAS-Bench-101 and find a MobileNetV3 architecture that attains 79.2% top-1 accuracy on ImageNet. Comment: Accepted to SDM2023; version includes supplementary material; 12 pages, 3 figures, 6 tables
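
    The ranking metric the abstract relies on is straightforward to reproduce; a tiny sketch with made-up accuracies and predictor scores (SciPy's spearmanr is the only dependency):

        from scipy.stats import spearmanr

        # hypothetical ground-truth accuracies and predictor scores for six architectures
        true_acc  = [93.1, 91.4, 94.0, 89.7, 92.5, 90.8]
        predicted = [0.71, 0.55, 0.80, 0.40, 0.66, 0.52]

        srcc, _ = spearmanr(true_acc, predicted)
        print(f"SRCC = {srcc:.3f}")  # 1.0 here: the predictor orders all six architectures correctly

    SRCC only rewards getting the ordering right, which is why it is a natural metric for a predictor used inside a search loop: the absolute accuracy values never need to be calibrated.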

    GENNAPE: Towards Generalized Neural Architecture Performance Estimators

    Predicting neural architecture performance is a challenging task and is crucial to neural architecture design and search. Existing approaches either rely on neural performance predictors, which are limited to modeling architectures in a predefined design space involving specific sets of operators and connection rules and cannot generalize to unseen architectures, or resort to zero-cost proxies, which are not always accurate. In this paper, we propose GENNAPE, a Generalized Neural Architecture Performance Estimator, which is pretrained on open neural architecture benchmarks and aims to generalize to completely unseen architectures through combined innovations in network representation, contrastive pretraining, and a fuzzy clustering-based predictor ensemble. Specifically, GENNAPE represents a given neural network as a Computation Graph (CG) of atomic operations, which can model an arbitrary architecture. It first learns a graph encoder via Contrastive Learning to encourage network separation by topological features, and then trains multiple predictor heads, which are soft-aggregated according to the fuzzy membership of a neural network. Experiments show that GENNAPE pretrained on NAS-Bench-101 achieves superior transferability to 5 different public neural network benchmarks, including NAS-Bench-201, NAS-Bench-301, and the MobileNet and ResNet families, under no or minimal fine-tuning. We further introduce 3 challenging newly labelled neural network benchmarks: HiAML, Inception and Two-Path, whose architectures concentrate in narrow accuracy ranges. Extensive experiments show that GENNAPE can correctly discern high-performance architectures in these families. Finally, when paired with a search algorithm, GENNAPE can find architectures that improve accuracy while reducing FLOPs on three families. Comment: AAAI 2023 Oral Presentation; includes supplementary materials with more details on introduced benchmarks; 14 pages, 6 figures, 10 tables
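
    A schematic sketch of the soft aggregation step as the abstract describes it: the fuzzy membership of an architecture embedding in each cluster weights the output of the corresponding predictor head. The membership formula is the standard fuzzy c-means one; the cluster centres, heads, and embedding below are invented for illustration and are not GENNAPE's trained components.

        import numpy as np

        def fuzzy_memberships(x, centres, m=2.0):
            """Standard fuzzy c-means membership of point x in each cluster centre."""
            d = np.linalg.norm(centres - x, axis=1) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            return inv / inv.sum()

        def soft_ensemble(x, centres, heads):
            """Weight each predictor head's output by the fuzzy membership of x."""
            u = fuzzy_memberships(x, centres)
            preds = np.array([head(x) for head in heads])
            return float(np.dot(u, preds))

        # invented example: 2-D architecture embeddings and three toy predictor heads
        centres = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
        heads = [
            lambda x: 0.70 + 0.05 * x[0],
            lambda x: 0.75 + 0.03 * x[1],
            lambda x: 0.68 + 0.04 * x.sum(),
        ]
        embedding = np.array([0.9, 0.8])
        print(f"ensembled accuracy estimate: {soft_ensemble(embedding, centres, heads):.3f}")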